Error Analysis of Bayesian Inverse Problems with Generative Priors

Hosseini, Bamdad, Huang, Ziqi

arXiv.org Machine Learning

Data-driven methods for solving inverse problems have become widely popular in recent years thanks to the rise of machine learning techniques. A popular approach is to train a generative model on additional data to learn a bespoke prior for the problem at hand. In this article we analyze such problems by deriving quantitative error bounds for minimum Wasserstein-2 generative models used as priors. We show that, under some assumptions, the error in the posterior due to the generative prior inherits the same rate as the prior error with respect to the Wasserstein-1 distance. We further present numerical experiments verifying that aspects of our error analysis manifest in some benchmarks, followed by an elliptic PDE inverse problem where a generative prior is used to model a non-stationary field.
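To fix ideas, the kind of bound described above can be written schematically as follows (the constant C(y) and the regularity assumptions on the likelihood are illustrative, not taken from the paper). With \mu the true prior, \mu_n the learned generative prior, and \mu^y, \mu_n^y the corresponding posteriors for data y, a posterior stability estimate of the form

    W_1(\mu^y, \mu_n^y) \le C(y) \, W_1(\mu, \mu_n)

implies that a prior approximation rate W_1(\mu, \mu_n) = O(n^{-r}) is inherited by the posterior at the same rate O(n^{-r}).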


Concentration bounds for intrinsic dimension estimation using Gaussian kernels

Andersson, Martin

arXiv.org Machine Learning

We prove finite-sample concentration and anti-concentration bounds for dimension estimation using Gaussian kernel sums. Our bounds provide explicit dependence on sample size, bandwidth, and local geometric and distributional parameters, characterizing precisely how regularity conditions govern statistical performance. We also propose a bandwidth selection heuristic using derivative information, which shows promise in numerical experiments.
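As a rough illustration of the kernel-sum idea (a minimal sketch, not the authors' estimator; the pairwise kernel average, the finite-difference slope, and the circle example are our own simplifications), the intrinsic dimension can be read off from how a Gaussian kernel sum scales with bandwidth: for data on a d-dimensional manifold, the average kernel value S(h) grows like h^d for small h, so d ≈ d log S / d log h.

import numpy as np

def gaussian_kernel_sum(X, h):
    # Average Gaussian kernel value over all distinct pairs at bandwidth h.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    n = X.shape[0]
    K = np.exp(-d2 / (2 * h ** 2))
    return (K.sum() - n) / (n * (n - 1))  # subtract the n self-pairs (each equal to 1)

def estimate_dimension(X, h, eps=0.05):
    # Intrinsic dimension as the log-log slope of the kernel sum:
    # S(h) ~ c * h^d for small h, so d ~ d log S / d log h,
    # approximated here by a symmetric finite difference around h.
    s_lo = gaussian_kernel_sum(X, h * (1 - eps))
    s_hi = gaussian_kernel_sum(X, h * (1 + eps))
    return (np.log(s_hi) - np.log(s_lo)) / (np.log(1 + eps) - np.log(1 - eps))

# Sanity check: points on a unit circle (intrinsic dimension 1) embedded in R^2.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, size=500)
X = np.column_stack([np.cos(t), np.sin(t)])
print(estimate_dimension(X, h=0.1))  # expected to be close to 1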


We will extend the submission with the discussions below.

Neural Information Processing Systems

We thank the reviewers for their insightful comments. In this rebuttal, we respond to remarks from the reviews. Remark 1: The work lacks a discussion comparing interpretability with BSP-Net; moreover, BSP-Net's CSG structure is fixed by definition, whereas our method produces different CSG trees for different instances (see Figure on the right). Remark 2: Only a single instance of CSG visualization for each class is shown.



[R2, R3] Amount of augmented data and sample efficiency

Neural Information Processing Systems

R3 asked why more CoDA samples don't always increase performance; this is all we meant by Remark 3.1. We agree our "intuitive" explanation of minimality might mislead. Drawing on the RL/causal literatures, we show a broad application of causal techniques yielding empirical sample efficiency in RL. The "mental ignorance" comment at the end of the remark refers to agent B's actual thoughts (but not agent A's belief about agent B's thoughts), and other true facts that agent A is ignorant of. We did not try the delta state trick; this is a helpful suggestion (thanks!) that we will try. Note that CoDA and MBPO were complementary in the Batch RL case.


We would like to thank all reviewers for their comments and questions

Neural Information Processing Systems

We would like to thank all reviewers for their comments and questions. We appreciate your recommendation about reordering the paper. Your example is correct; we will rephrase this remark to make it clearer. Nguyen et al. [25] recognize that, for a strongly convex objective on an unbounded domain, the bounded-gradients assumption is incompatible with strong convexity. Hence, like Nguyen et al. [25], Theorem 1 (iii) avoids the incompatible bounded-gradients assumption.
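For completeness, the incompatibility is straightforward to verify (a standard argument, sketched here under the usual definitions rather than quoted from [25]): if f is \mu-strongly convex on R^d with minimizer x^*, then \nabla f(x^*) = 0 and strong convexity gives

    (\nabla f(x) - \nabla f(x^*))^\top (x - x^*) \ge \mu \|x - x^*\|^2,

so by Cauchy-Schwarz \|\nabla f(x)\| \ge \mu \|x - x^*\|, which is unbounded as \|x - x^*\| \to \infty. Hence no global bound \|\nabla f(x)\| \le G can hold on an unbounded domain.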